250 research outputs found

    Functional connectivity in relation to motor performance and recovery after stroke.

    Plasticity after stroke has traditionally been studied by observing changes only in the spatial distribution and laterality of focal brain activation during affected limb movement. However, neural reorganization is multifaceted and our understanding may be enhanced by examining dynamics of activity within large-scale networks involved in sensorimotor control of the limbs. Here, we review functional connectivity as a promising means of assessing the consequences of a stroke lesion on the transfer of activity within large-scale neural networks. We first provide a brief overview of techniques used to assess functional connectivity in subjects with stroke. Next, we review task-related and resting-state functional connectivity studies that demonstrate a lesion-induced disruption of neural networks, the relationship of the extent of this disruption with motor performance, and the potential for network reorganization in the presence of a stroke lesion. We conclude with suggestions for future research and theories that may enhance the interpretation of changing functional connectivity. Overall, findings suggest that a network-level assessment provides a useful framework to examine brain reorganization and to potentially better predict behavioral outcomes following stroke.
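    The functional connectivity the review discusses is, in one common formulation, the pairwise statistical dependence between regional activity time series. A minimal sketch using Pearson correlation (the signals and region count below are invented for illustration, not the review's specific method):

```python
import numpy as np

def functional_connectivity(ts):
    """Pairwise Pearson correlation between regional time series.

    ts: array of shape (n_regions, n_timepoints).
    Returns an (n_regions, n_regions) connectivity matrix.
    """
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
# Two "regions" driven by a shared signal plus noise, one independent region
ts = np.vstack([
    shared + 0.5 * rng.standard_normal(200),
    shared + 0.5 * rng.standard_normal(200),
    rng.standard_normal(200),
])
fc = functional_connectivity(ts)  # fc[0, 1] is high, fc[0, 2] is near zero
```

    Resting-state studies typically build such a matrix per subject and compare connectivity strength between patient and control groups, or correlate it with motor scores.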

    Non-parametric statistical thresholding for sparse magnetoencephalography source reconstructions.

    Uncovering brain activity from magnetoencephalography (MEG) data requires solving an ill-posed inverse problem, greatly confounded by noise, interference, and correlated sources. Sparse reconstruction algorithms, such as Champagne, show great promise in that they provide focal brain activations robust to these confounds. In this paper, we address the technical considerations of statistically thresholding brain images obtained from sparse reconstruction algorithms. The source power distribution of sparse algorithms makes this class of algorithms ill-suited to "conventional" techniques. We propose two non-parametric resampling methods hypothesized to be compatible with sparse algorithms. The first adapts the maximal statistic procedure to sparse reconstruction results and the second departs from the maximal statistic, putting forth a less stringent procedure that protects against spurious peaks. Simulated MEG data and three real data sets are utilized to demonstrate the efficacy of the proposed methods. Two sparse algorithms, Champagne and generalized minimum-current estimation (G-MCE), are compared to two non-sparse algorithms, a variant of minimum-norm estimation, sLORETA, and an adaptive beamformer. The results, in general, demonstrate that the already sparse images obtained from Champagne and G-MCE are further thresholded by both proposed statistical thresholding procedures. While non-sparse algorithms are thresholded by the maximal statistic procedure, they are not made sparse. The work presented here is one of the first attempts to address the problem of statistically thresholding sparse reconstructions, and aims to improve upon this already advantageous and powerful class of algorithms.
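    The maximal statistic procedure the paper adapts can be sketched in a few lines: for each resample, take the maximum statistic across all sources, then threshold the observed image at the (1 − α) quantile of these maxima, which controls the family-wise error rate over the whole image. The data below are synthetic, and the resampling scheme (sign-flipping, permutation) would depend on the experimental design:

```python
import numpy as np

def maximal_statistic_threshold(stat_map, null_maps, alpha=0.05):
    """Threshold a source image with the maximal-statistic procedure.

    stat_map: (n_sources,) observed statistic per source.
    null_maps: (n_resamples, n_sources) statistics from resampled data.
    """
    max_null = null_maps.max(axis=1)              # image-wise maximum per resample
    threshold = np.quantile(max_null, 1 - alpha)  # (1 - alpha) quantile of maxima
    return np.where(stat_map > threshold, stat_map, 0.0), threshold

rng = np.random.default_rng(1)
null = rng.standard_normal((1000, 50))   # null statistics for 50 sources
observed = rng.standard_normal(50)
observed[3] = 6.0                        # one strongly active source
thresholded, thr = maximal_statistic_threshold(observed, null)
```

    For a sparse reconstruction the observed image already has few non-zero entries, which is exactly why the paper argues that conventional parametric thresholds, tuned to smooth non-sparse images, do not transfer directly.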

    Learning and adaptation in speech production without a vocal tract

    How is the complex audiomotor skill of speaking learned? To what extent does it depend on the specific characteristics of the vocal tract? Here, we developed a touchscreen-based speech synthesizer to examine learning of speech production independent of the vocal tract. Participants were trained to reproduce heard vowel targets by reaching to locations on the screen without visual feedback and receiving endpoint vowel sound auditory feedback that depended continuously on touch location. Participants demonstrated learning as evidenced by rapid increases in accuracy and consistency in the production of trained targets. This learning generalized to productions of novel vowel targets. Subsequent to learning, sensorimotor adaptation was observed in response to changes in the location-sound mapping. These findings suggest that participants learned adaptable sensorimotor maps allowing them to produce desired vowel sounds. These results have broad implications for understanding the acquisition of speech motor control.

    Speech Production as State Feedback Control

    Spoken language exists because of a remarkable neural process. Inside a speaker's brain, an intended message gives rise to neural signals activating the muscles of the vocal tract. The process is remarkable because these muscles are activated in just the right way that the vocal tract produces sounds a listener understands as the intended message. What is the best approach to understanding the neural substrate of this crucial motor control process? One of the key recent modeling developments in neuroscience has been the use of state feedback control (SFC) theory to explain the role of the CNS in motor control. SFC postulates that the CNS controls motor output by (1) estimating the current dynamic state of the thing (e.g., arm) being controlled, and (2) generating controls based on this estimated state. SFC has successfully predicted a great range of non-speech motor phenomena, but as yet has not received attention in the speech motor control community. Here, we review some of the key characteristics of speech motor control and what they say about the role of the CNS in the process. We then discuss prior efforts to model the role of the CNS in speech motor control, and argue that these models have inherent limitations – limitations that are overcome by an SFC model of speech motor control which we describe. We conclude by discussing a plausible neural substrate of our model.
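    The two SFC ingredients, state estimation and control computed from the estimate rather than from raw delayed feedback, can be illustrated with a toy scalar plant. All dynamics and gains below are invented for illustration and are not taken from the model the paper describes:

```python
import numpy as np

# Toy plant: x[t+1] = a*x[t] + b*u[t], noisy feedback y[t] = x[t] + noise.
# SFC analogue: (1) estimate the state from an internal-model prediction
# corrected by sensory feedback, (2) compute the control from the estimate.
a, b = 1.0, 1.0   # plant dynamics (hypothetical)
K = 0.5           # feedback gain: u = -K * x_hat
L = 0.3           # observer gain: how much to trust sensory feedback

rng = np.random.default_rng(2)
x, x_hat = 5.0, 0.0   # true state starts away from the target (0)
for _ in range(50):
    u = -K * x_hat                        # control from the ESTIMATED state
    y = x + 0.1 * rng.standard_normal()   # noisy, delayed-in-spirit measurement
    # Observer: predict with the internal model, correct with prediction error
    x_hat = a * x_hat + b * u + L * (y - x_hat)
    x = a * x + b * u                     # true plant update
# Both the true state and the estimate settle near the target.
```

    The correction term `L * (y - x_hat)` plays the role the paper assigns to comparing sensory feedback against an efference-copy-derived prediction.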

    Auditory Cortical Plasticity in Learning to Discriminate Modulation Rate

    The discrimination of temporal information in acoustic inputs is a crucial aspect of auditory perception, yet very few studies have focused on auditory perceptual learning of timing properties and associated plasticity in adult auditory cortex. Here, we trained participants on a temporal discrimination task. The main task used a base stimulus (four tones separated by intervals of 200 ms) that had to be distinguished from a target stimulus (four tones with intervals down to ∼180 ms). We show that participants' auditory temporal sensitivity improves with a short amount of training (3 d, 1 h/d). Learning to discriminate temporal modulation rates was accompanied by a systematic amplitude increase of the early auditory evoked responses to trained stimuli, as measured by magnetoencephalography. Additionally, learning and auditory cortex plasticity partially generalized to interval discrimination but not to frequency discrimination. Auditory cortex plasticity associated with short-term perceptual learning was manifested as an enhancement of auditory cortical responses to trained acoustic features only in the trained task. Plasticity was also manifested as induced non-phase-locked high gamma-band power increases in inferior frontal cortex during performance in the trained task. Functional plasticity in auditory cortex is here interpreted as the product of bottom-up and top-down modulations.

    Correlation of Clinical Neuromusculoskeletal and Central Somatosensory Performance: Variability in Controls and Patients With Severe and Mild Focal Hand Dystonia

    Focal hand dystonia (FHd) is a recalcitrant, disabling movement disorder, characterized by involuntary co-contractions of agonists and antagonists, that can develop in patients who overuse or misuse their hands. The aim of this study was to document clinical neuromusculoskeletal performance and somatosensory responses (magnetoencephalography) in healthy controls and in FHd subjects with mild versus severe hand dystonia. The performance of healthy subjects (n = 17) was significantly better than that of FHd subjects (n = 17) on all clinical parameters. Those with mild dystonia (n = 10) demonstrated better musculoskeletal skills, task-specific motor performance, and sensory discrimination, but the performance of sensory and fine motor tasks was slower than that of patients with severe dystonia. In terms of somatosensory evoked field responses (SEFs), FHd subjects demonstrated a significant difference in the location of the hand representation on the x and y axes, lower amplitude of SEFs integrated across latency, and a higher ratio of mean SEF amplitude to latency than the controls. Bilaterally, those with FHd (mild and severe) lacked progressive sequencing of the digits from inferior to superior. On the affected digits, subjects with severe dystonia had a significantly higher ratio of SEF amplitude to latency and a significantly smaller mean volume of the cortical hand representation than those with mild dystonia. Severity of dystonia positively correlated with the ratio of SEF mean amplitude to latency (0.9029 affected, 0.8477 unaffected; p < 0.01). The results of the present study strengthen the evidence that patients with FHd demonstrate signs of somatosensory degradation of the hand that correlates with clinical sensorimotor dysfunction, with characteristics of the dedifferentiation varying by the severity of hand dystonia. If these findings represent aberrant learning, then effective rehabilitation must incorporate the principles of neuroplasticity. Training must be individualized to each patient to rebalance the sensorimotor feedback loop and to restore normal fine motor control.

    High-Frequency Oscillations in Distributed Neural Networks Reveal the Dynamics of Human Decision Making

    We examine the relative timing of numerous brain regions involved in human decisions that are based on external criteria, learned information, personal preferences, or unconstrained internal considerations. Using magnetoencephalography (MEG) and advanced signal analysis techniques, we were able to non-invasively reconstruct oscillations of distributed neural networks in the high-gamma frequency band (60–150 Hz). The time course of the observed neural activity suggested that two-alternative forced choice tasks are processed in four overlapping stages: processing of sensory input, option evaluation, intention formation, and action execution. Visual areas are activated first, and show recurring activations throughout the entire decision process. The temporo-occipital junction and the intraparietal sulcus are active during evaluation of external values of the options, 250–500 ms after stimulus presentation. Simultaneously, personal preference is mediated by cortical midline structures. Subsequently, the posterior parietal and superior occipital cortices appear to encode intention, with different subregions being responsible for different types of choice. The cerebellum and inferior parietal cortex are recruited for internal generation of decisions and actions, when all options have the same value. Action execution was accompanied by activation peaks in the contralateral motor cortex. These results suggest that high-gamma oscillations as recorded by MEG allow a reliable reconstruction of decision processes with excellent spatiotemporal resolution.

    Speech target modulates speaking induced suppression in auditory cortex

    BACKGROUND: Previous magnetoencephalography (MEG) studies have demonstrated speaking-induced suppression (SIS) in the auditory cortex during vocalization tasks, wherein the M100 response to a subject's own speaking is reduced compared to the response when they hear playback of their speech. RESULTS: The present MEG study investigated the effects of utterance rapidity and complexity on SIS: The greatest difference between speak and listen M100 amplitudes (i.e., most SIS) was found in the simple speech task. As the utterances became more rapid and complex, SIS was significantly reduced (p = 0.0003). CONCLUSION: These findings are highly consistent with our model of how auditory feedback is processed during speaking, where incoming feedback is compared with an efference-copy derived prediction of expected feedback. Thus, the results provide further insights about how speech motor output is controlled, as well as the computational role of auditory cortex in transforming auditory feedback.
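    The speak-versus-listen comparison can be expressed as a simple suppression index, for example the fractional reduction of the speak-condition M100 relative to the listen condition. The formula and the amplitude values below are illustrative assumptions, not the paper's measure or data:

```python
import numpy as np

def sis_index(speak_m100, listen_m100):
    """Speaking-induced suppression: fractional reduction of the
    speak-condition M100 relative to passive listening to playback.
    Positive values indicate suppression."""
    speak = np.mean(speak_m100)
    listen = np.mean(listen_m100)
    return (listen - speak) / listen

# Hypothetical M100 amplitudes (arbitrary units) per condition
simple_speech = sis_index(speak_m100=[40, 42, 38], listen_m100=[60, 58, 62])
complex_speech = sis_index(speak_m100=[55, 53, 57], listen_m100=[60, 58, 62])
# simple_speech > complex_speech, mirroring the reported reduction of SIS
# for more rapid and complex utterances
```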